The number of techniques for processing satellite imagery can be overwhelming. This tutorial is an attempt to summarize and organize some of these techniques, focusing on those available in R (either natively or via third-party software, e.g. SAGA). Other methods are also described, but only enough to provide some context and to weigh the strengths and limitations of R as a tool for pre-processing satellite images.
The topic of this tutorial is pansharpening, i.e. a group of techniques that fuse images with complementary features to produce images with both high spatial and high spectral resolution. The tutorial gives:
An introduction to the basics of pansharpening (Section 1),
An overview of methods (Section 2), and
Descriptions of some methods (Section 3), including brief theoretical explanations and simple, reproducible examples.
To follow the basics of this tutorial, download this repository and make it your working directory. Also, you need to install RStoolbox (Leutner et al., 2019), tmap (Tennekes, 2018), and raster (Hijmans, 2020). If they are not installed already, type:
install.packages("raster")
install.packages("RStoolbox")
install.packages("tmap")
Make sure to load them into the working environment by running:
library("raster")
library("RStoolbox")
library("tmap")
For this tutorial, we built the compare_vis() function, which can be found in the repository. The function has two inputs, the original (ms.img argument) and the pansharpened multi-spectral image (pan.img argument). compare_vis() is just a wrapper around tmap instructions that displays the original and the pansharpened image side by side. One way to load this function into R is as follows:
source("./R/compare_vis.R")
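For reference, a wrapper of this kind could be sketched with tmap as follows. This is only a hypothetical illustration: the function body, the assumed RGB band order in tm_rgb(), and the titles argument are our assumptions, and the actual compare_vis() shipped in the repository may differ.

```r
library("raster")
library("tmap")

# Hypothetical sketch of a compare_vis()-style wrapper; the real function
# in the repository may be implemented differently.
compare_vis_sketch <- function(pan.img, ms.img,
                               titles = c("Pansharpened", "Original")) {
  left  <- tm_shape(pan.img) + tm_rgb(r = 1, g = 2, b = 3) +
    tm_layout(title = titles[1])
  right <- tm_shape(ms.img) + tm_rgb(r = 1, g = 2, b = 3) +
    tm_layout(title = titles[2])
  tmap_arrange(left, right, ncol = 2)  # side-by-side display
}
```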
Pansharpening methods generally fuse a panchromatic and a multi-spectral (or hyper-spectral) image:
Panchromatic image of Pamplona (Navarra, Spain). The image was captured by Landsat-8 on June 30th, 2015.
A panchromatic image (PAN) is an image that shows more spatial detail (high spatial resolution). Instruments retrieving PAN images sense broader bandwidths (the visible range and, to some degree, the near-infrared) to increase the signal-to-noise ratio and capture finer spatial details. This also means that panchromatic images have a single band and do not distinguish the intensities of the various wavelengths (low spectral resolution). Here, we use the PAN image in the folder ./Data/small_sample, which has a resolution of \(15 \times 15\,m^2\):
# load
pan.file <- "./Data/small_sample/pan_example.tif"
pan.img <- raster(pan.file)
RGB representation of a multi-spectral image of Pamplona (Navarra, Spain). The image was captured by Landsat-8 on June 30th, 2015.
Multi-spectral (MS) images carry information about the radiance/reflectance at narrower wavelengths. The difference between multi- and hyper-spectral lies in the number of bands (tens vs. hundreds) and in how narrow the bandwidths are. Our multi-spectral image has 4 bands: red, green, blue, and near-infrared (NIR). The image can be found under ./Data/small_sample and has a resolution of \(30 \times 30\,m^2\):
# load
ms.file <- "./Data/small_sample/ms_example.tif"
ms.img <- stack(ms.file)
These PAN and MS images were captured by Landsat-8 on June 30th, 2015. They both belong to Collection 1 and processing Level 1. In Landsat-8, the PAN image comes as an additional band of the multi-spectral product (band 8).
Before pansharpening, the literature recommends co-registering the PAN and MS images so they are properly aligned. This is true even when they belong to the same platform (Wegmann et al., 2016). The function coregisterImages() from RStoolbox applies a method which basically consists in shifting a slave image along the vertical and horizontal axes over a reference image so as to maximize their mutual information. This function requires both master and slave to have the same number of bands.
Here, the co-registration is probably not necessary, as the images come from the same instrument. However, we run an informal test with a single band from the MS image and the PAN, as a mere illustration of the function:
# shift from -10 to +10 pixels horizontally and vertically
img.shift <- coregisterImages(ms.img[[1]],
                              pan.img,
                              shift = 10,
                              reportStats = TRUE)
img.shift$bestShift
## x y
## 221 0 0
As the best shift is \((0, 0)\), we conclude that the PAN and MS images are well aligned along both axes.
Pansharpening methods are classified into two main categories (Alparone et al., 2015):
Component Substitution (CS) methods: These methods project the multi-spectral image into a new vector space. The new vector space disentangles the spatial structure from the spectral information. The component representing the spatial structure is replaced by the PAN image. Finally, the image is projected back to the original vector space. Among the most popular CS techniques are the Intensity-Hue-Saturation method (IHS), the Principal Component Analysis (PCA), and the Gram-Schmidt (GS) spectral sharpening.
Multi-Resolution Analysis (MRA): This group of methods extracts the spatial structure from the PAN image through a multi-resolution decomposition. The spatial details are then injected into an interpolated version of the multi-spectral image. The most popular are the (decimated) Discrete Wavelet Transform (DWT), the Undecimated Wavelet Transform (UDWT), the A-Trous Wavelet Transform (ATW), and the Laplacian Pyramid (LP).
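As a rough illustration of the MRA idea, the sketch below extracts spatial details with a single-level high-pass filter, a crude stand-in for a proper wavelet or pyramid decomposition, and injects them into an interpolated MS band. All objects and values are hypothetical toy data, not part of the tutorial's dataset.

```r
library("raster")
set.seed(7)

# Toy PAN (8x8) and a single toy MS band (4x4) on the same extent
pan <- setValues(raster(nrows = 8, ncols = 8, xmn = 0, xmx = 8,
                        ymn = 0, ymx = 8), runif(64, 0, 255))
ms1 <- setValues(raster(nrows = 4, ncols = 4, xmn = 0, xmx = 8,
                        ymn = 0, ymx = 8), runif(16, 0, 255))

# 1. Low-pass approximation of the PAN (3x3 mean filter)
pan.low <- focal(pan, w = matrix(1 / 9, 3, 3),
                 pad = TRUE, padValue = NA, na.rm = TRUE)
# 2. Spatial details = PAN minus its low-pass approximation
details <- pan - pan.low
# 3. Interpolate the MS band and inject the details
ms.up <- resample(ms1, pan, method = "bilinear")
sharp <- ms.up + details
```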
A third group combines both CS and MRA, and its members are thus called hybrid methods. Current software in (or linked to) R covers some CS methods. These provide high geometrical quality and are simple and computationally fast. However, they do not account for potential discrepancies between the PAN and the MS image, which can lead to spectral distortions in some circumstances. They should therefore be used with care (or avoided) for the analysis of spectral signatures (Alparone et al., 2015; Wegmann et al., 2016).
The main steps in CS sharpening are: (1) interpolate the MS image to match the scale of the PAN image; (2) define and apply spectral weights to each band to obtain the intensity image (i.e., an image representing the darkness-lightness of the colors); (3) match the histograms of the PAN and the intensity image, so that they show similar frequencies for each intensity value; (4) calculate the injection gains (i.e., the proportional part of the spatial detail that goes to each band) and add them to the interpolated image. Variants of CS differ in the definition of the spectral weights in step 2 and in the calculation of the injection gains in step 4.
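To fix ideas, these four steps can be sketched on toy data with a PCA-based substitution. This is only an illustrative sketch, not the implementation of any package: the matching in step 3 is a simple mean/variance adjustment rather than full histogram matching, and all objects are hypothetical.

```r
library("raster")
set.seed(3)

# Toy 4-band MS (4x4) and PAN (8x8) on the same extent (hypothetical data)
m   <- raster(nrows = 4, ncols = 4, xmn = 0, xmx = 8, ymn = 0, ymx = 8)
ms  <- stack(lapply(1:4, function(i) setValues(m, runif(16, 0, 255))))
pan <- setValues(raster(nrows = 8, ncols = 8, xmn = 0, xmx = 8,
                        ymn = 0, ymx = 8), runif(64, 0, 255))

ms.up  <- resample(ms, pan, method = "ngb")  # (1) interpolate to the PAN grid
pc     <- prcomp(getValues(ms.up))           # (2) project to a new space
scores <- pc$x
panv   <- getValues(pan)
# (3) match the PAN to the first (spatial) component: mean/variance matching
panv <- (panv - mean(panv)) / sd(panv) * sd(scores[, 1]) + mean(scores[, 1])
scores[, 1] <- panv                          # substitute the spatial component
back  <- sweep(scores %*% t(pc$rotation), 2, pc$center, "+")  # (4) back-project
sharp <- setValues(ms.up, back)
```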
The RStoolbox package implements three variants of CS (Leutner et al., 2019) through the panSharpen() function: the Brovey Transform (BT), the Intensity-Hue-Saturation (IHS), and the Principal Component Analysis (PCA). Another alternative is the image processing module from SAGA, which additionally includes Spectral Sharpening. We will check how SAGA may boost the computational performance of pansharpening when dealing with large images. Due to the need for third-party software and the methodological overlap between RStoolbox and SAGA, we relegate its use to the end of this tutorial.
BT (Gillespie et al., 1987) is one of the simplest pansharpening techniques. BT applies ratios between the bands of the MS image and the PAN to obtain the sharpened image. More specifically, panSharpen() first interpolates the MS image (\(\widetilde{MS}\)) to match the resolution of the PAN image using nearest neighbors, then calculates the intensity image as the mean of all bands (\(I = \frac{1}{N} \sum_{k = 1}^{N}{\widetilde{MS}_k}\)), and finally injects the details into the original image by multiplying each band by the ratio between the PAN and the intensity image:
\[MS_{PAN,k}= \widetilde{MS}_k \times \frac{PAN}{I}\]
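To make the formula concrete, here is a bare-bones sketch of the BT steps on made-up rasters. The data and object names are hypothetical, and this is not the panSharpen() implementation, just the three operations of the transform written out by hand.

```r
library("raster")
set.seed(1)

# Hypothetical toy data: 3-band MS (2x2) and PAN (4x4) on the same extent
m   <- raster(nrows = 2, ncols = 2, xmn = 0, xmx = 4, ymn = 0, ymx = 4)
ms  <- stack(lapply(1:3, function(i) setValues(m, runif(4, 0, 255))))
pan <- setValues(raster(nrows = 4, ncols = 4, xmn = 0, xmx = 4,
                        ymn = 0, ymx = 4), runif(16, 0, 255))

ms.up <- resample(ms, pan, method = "ngb")  # interpolate MS to the PAN grid
I     <- mean(ms.up)                        # intensity: mean of all bands
sharp <- ms.up * (pan / I)                  # inject details band by band
```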
The Brovey transform can be applied by setting the argument method = "brovey". BT only deals with the red-green-blue (RGB) bands, so we must specify which layers in the MS image correspond to R, G, and B. The red, green, and blue bands in ms.img are the 1st, 2nd, and 3rd layers, respectively. Thus, we run:
library("RStoolbox")
bro.img <- panSharpen(ms.img, pan.img,r=1,g=2,b=3,method="brovey")
We can visually compare the original and the sharpened image with the function compare_vis():
compare_vis(bro.img, ms.img, titles = c("Brovey", "Original"))
Explore for yourself the differences in spatial detail and color balance between the pansharpened image (left) and the original image (right).